A Proofs

Lemma 1. For the mixed imagined opponent policy (IOP) π, according to Bayes' theorem, we update the posterior probability over the imagined opponent policies after each observed opponent action. The changing trends of α are diverse against different opponents, which enables the mixed IOP to accurately model the opponent policy.

Figure 7: Performance against different types of opponents, i.e., fixed policy, naïve learner, and reasoning learner.
Figure 8: Performance against different types of opponents, i.e., fixed policy, naïve learner, and reasoning learner. Note that M = 1 is MBOM w/o IOPs.
Figure 9: Performance against different types of opponents, i.e., fixed policy, naïve learner, and reasoning learner in Predator-Prey, where the x-axis is the joint opponent index.

Figure 9 shows the performance against different types of opponents compared with the baselines. For each type, there are ten test joint opponent policies.
Model-Based Opponent Modeling
Xiaopeng Yu, Jiechuan Jiang, Haobin Jiang, Zongqing Lu
When one agent interacts with a multi-agent environment, it is challenging to deal with various opponents unseen before. Modeling the behaviors, goals, or beliefs of opponents could help the agent adjust its policy to adapt to different opponents. In addition, it is also important to consider opponents that are learning simultaneously or capable of reasoning. However, existing work usually tackles only one of the aforementioned types of opponent. In this paper, we propose model-based opponent modeling (MBOM), which employs the environment model to adapt to all kinds of opponents. MBOM simulates the recursive reasoning process in the environment model and imagines a set of improving opponent policies. To effectively and accurately represent the opponent policy, MBOM further mixes the imagined opponent policies according to their similarity with the real behaviors of the opponent. Empirically, we show that MBOM achieves more effective adaptation than existing methods in competitive and cooperative environments against different types of opponents, i.e., fixed policy, naïve learner, and reasoning learner.
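The mixing step described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: it assumes a discrete opponent action space, and all names (`bayes_mix_weights`, `mixed_iop_action_probs`, the uniform prior) are assumptions for illustration. Each of M imagined opponent policies (IOPs) gets a weight α, updated by Bayes' rule from the likelihood it assigns to the opponent's observed action; the mixed IOP is the weighted average of the imagined policies' action distributions.

```python
import numpy as np

def bayes_mix_weights(alpha, likelihoods):
    """Bayesian update of IOP mixing weights:
    posterior ∝ prior × likelihood of the observed opponent action."""
    alpha = np.asarray(alpha, dtype=float) * np.asarray(likelihoods, dtype=float)
    return alpha / alpha.sum()

def mixed_iop_action_probs(alpha, policy_probs):
    """Mixed IOP: α-weighted average of the M imagined policies'
    action distributions (policy_probs has shape [M, num_actions])."""
    return np.einsum("m,ma->a", np.asarray(alpha), np.asarray(policy_probs))

# Toy example (hypothetical numbers): M = 3 IOPs over 2 opponent actions.
alpha = np.ones(3) / 3                 # uniform prior over the IOPs
policies = np.array([[0.9, 0.1],       # each row: one IOP's action distribution
                     [0.5, 0.5],
                     [0.1, 0.9]])
observed_action = 1                    # the opponent actually played action 1
alpha = bayes_mix_weights(alpha, policies[:, observed_action])
mixed = mixed_iop_action_probs(alpha, policies)
# The IOP that best explains the observation (row 2) gains the most weight.
```

Repeating the update over a window of observed opponent actions lets α track which level of imagined reasoning currently matches the opponent, which is how the mixed IOP can follow both fixed and improving opponents.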